Convergence of a Recurrent Neural Network for Nonconvex Optimization Based on an Augmented Lagrangian Function

Authors

  • Xiaolin Hu
  • Jun Wang
Abstract

In this paper, a recurrent neural network based on an augmented Lagrangian function is proposed for seeking local minima of nonconvex optimization problems with inequality constraints. First, each equilibrium point of the neural network corresponds to a Karush-Kuhn-Tucker (KKT) point of the problem. Second, by appropriately choosing a control parameter, the neural network can be made asymptotically stable at those local minima that satisfy some mild conditions. The latter property is ensured by the convexification capability of the augmented Lagrangian function. The proposed scheme is inspired by many existing neural networks in the literature and can be regarded as an extension or an improved version of them. A simulation example is discussed to illustrate the results.
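The abstract does not reproduce the model itself. As a minimal sketch, assume the problem is to minimize \(f(x)\) subject to \(g_i(x) \le 0\), \(i = 1, \dots, m\); a standard augmented Lagrangian with control parameter \(c > 0\) (the paper's exact function may differ) is

\[
L_c(x, \lambda) = f(x) + \frac{1}{2c} \sum_{i=1}^{m} \Big( \max\{0,\ \lambda_i + c\,g_i(x)\}^2 - \lambda_i^2 \Big),
\]

and a typical primal-dual gradient flow built on it is

\[
\dot{x} = -\nabla f(x) - \sum_{i=1}^{m} \max\{0,\ \lambda_i + c\,g_i(x)\}\,\nabla g_i(x),
\qquad
\dot{\lambda}_i = \max\{0,\ \lambda_i + c\,g_i(x)\} - \lambda_i.
\]

At an equilibrium, \(\dot{\lambda}_i = 0\) forces \(\lambda_i \ge 0\), \(g_i(x) \le 0\), and \(\lambda_i g_i(x) = 0\), while \(\dot{x} = 0\) then gives \(\nabla f(x) + \sum_i \lambda_i \nabla g_i(x) = 0\): together, exactly the KKT conditions, consistent with the equilibrium property stated above. The flow can be simulated by forward-Euler integration; the toy problem, parameter values, and step size below are hypothetical, not the paper's simulation example.

    # Toy problem (hypothetical): minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0.
    # The KKT point is x* = 1, lambda* = 2.

    def f_grad(x):   # gradient of the objective f(x) = x^2
        return 2.0 * x

    def g(x):        # inequality constraint, feasible when g(x) <= 0
        return 1.0 - x

    def g_grad(x):   # gradient of the constraint g(x) = 1 - x
        return -1.0

    c = 5.0            # control parameter, assumed large enough for convexification
    dt = 1e-3          # Euler step size
    x, lam = 0.0, 0.0  # arbitrary initial state

    for _ in range(20_000):
        m = max(0.0, lam + c * g(x))               # max{0, lambda + c g(x)}
        x = x - dt * (f_grad(x) + m * g_grad(x))   # dx/dt      = -grad_x L_c(x, lambda)
        lam = lam + dt * (m - lam)                 # dlambda/dt = m - lambda

    print(f"x = {x:.4f}, lambda = {lam:.4f}")      # approaches (1.0, 2.0)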


Similar References

An Efficient Neurodynamic Scheme for Solving a Class of Nonconvex Nonlinear Optimization Problems

By p-power (or partial p-power) transformation, the Lagrangian function of a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and the KKT points of the nonconvex optimizatio...
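The snippet does not say which NCP (nonlinear complementarity problem) function the network uses; the Fischer-Burmeister function is one standard choice:

\[
\phi(a, b) = \sqrt{a^2 + b^2} - a - b,
\]

which satisfies \(\phi(a, b) = 0\) if and only if \(a \ge 0\), \(b \ge 0\), and \(ab = 0\). Applied componentwise to the pairs \((\lambda_i, -g_i(x))\), it converts the KKT complementarity conditions into a system of equations, which underlies the claimed one-to-one correspondence between equilibria and KKT points.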

Full Text

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
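For orientation, the problem class described in this snippet can be written as (a standard formulation assumed here, since the snippet is truncated):

\[
\min_x\ f(x) \quad \text{subject to} \quad g(x) \le 0,\ \ Ax = b,
\]

with \(f\) convex but possibly nonsmooth, \(g\) a vector of nonlinear inequality constraints, and \(Ax = b\) the affine equality constraints. Because \(f\) may be nondifferentiable, such networks are usually formulated as differential inclusions driven by the subdifferential \(\partial f(x)\) rather than by a gradient.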

Full Text

Using Three Layer Neural Network to Compute Multi-valued Functions

  • A novel global hybrid algorithm for feedforward neural networks, p. 9
  • Study on relationship between NIHSS and TCM-SSASD based on the BP neural network multiple models method, p. 17
  • Application of back-propagation neural network to power transformer insulation diagnosis, p. 26
  • Momentum BP neural networks in structural damage detection based on static displacements and natural frequencies, p. 35
  • Defo...

Full Text

An Augmented Lagrangian Based Algorithm for Distributed NonConvex Optimization

This paper is about distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interprete...
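A typical way to write a separable objective with coupled affine constraints, assumed here for concreteness (the paper's notation may differ), is

\[
\min_{x_1, \dots, x_N}\ \sum_{i=1}^{N} f_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{N} A_i x_i = b,
\]

where each \(f_i\) may be nonconvex and is held locally by agent \(i\); the shared dual variables mentioned above are the Lagrange multipliers of the coupling constraint \(\sum_i A_i x_i = b\).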

Full Text

Multiplier Algorithm Based on A New Augmented Lagrangian Function

In this paper, for a nonconvex optimization problem with both equality and inequality constraints, we introduce a new augmented Lagrangian function and propose the corresponding multiplier algorithm. Global convergence is established without requiring boundedness of the multiplier sequences. In particular, if the algorithm terminates in finitely many steps, then we obtain a KKT point of the primal pr...
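For context, a classical multiplier iteration of the Hestenes-Powell-Rockafellar type for constraints \(h(x) = 0\) and \(g(x) \le 0\) reads

\[
x_{k+1} \in \arg\min_x\ L_c(x, \lambda_k, \mu_k), \qquad
\lambda_{k+1} = \lambda_k + c\,h(x_{k+1}), \qquad
\mu_{k+1} = \max\{0,\ \mu_k + c\,g(x_{k+1})\}.
\]

This is shown only for orientation; the paper introduces a new augmented Lagrangian function, so its actual update rule differs from this classical form.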

Full Text



Publication date: 2007